30 research outputs found

    Developing an Embedded Model for Test suite prioritization process to optimize consistency rules for inconsistencies detection and model changes

    Get PDF
    Software models typically contain many inconsistencies, and consistency checkers help engineers find them. Even if engineers are willing to tolerate inconsistencies, they are better off knowing about their existence to avoid follow-on errors and unnecessary rework. However, current approaches do not detect or track inconsistencies fast enough. This paper presents an automated approach for detecting and tracking inconsistencies in real time (while the model changes). Engineers only need to define consistency rules (in any language) and our approach automatically identifies how model changes affect these consistency rules. It does this by observing the behavior of consistency rules to understand how they affect the model. The approach is quick, correct, scalable, fully automated, and easy to use, as it does not require any special skills from the engineers using it. We use this model to define generic prioritization criteria that are applicable to GUI, Web, and embedded applications. We evolve the model and use it to develop a unified theory. Within the context of this model, we develop and empirically evaluate several prioritization criteria and apply them to four stand-alone GUI applications and three Web-based applications, their existing test suites, and mainly embedded systems. In this model we run our data collection and test suite prioritization process on only seven programs and their existing test suites; an experiment that would generalize more readily would include multiple programs of different sizes and from different domains, and we may conduct additional empirical studies with larger EDS to address this threat. First, we assume that each test case has a uniform cost of running (processor time) and monitoring (human time); these assumptions may not hold in practice. Second, we assume that each fault contributes uniformly to the overall cost, which again may not hold in practice.
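    The core mechanism described above, observing which model elements a consistency rule reads so that only affected rules are re-checked after a change, can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation; the `Model`, `IncrementalChecker`, and rule names are invented for the example.

```python
# Sketch of scope-based incremental consistency checking: while a rule
# runs, record which model elements it reads; when an element changes,
# re-evaluate only the rules whose recorded scope contains it.
# All names here are hypothetical, not taken from the paper.

class Model:
    def __init__(self, elements):
        self.elements = elements          # element name -> property dict
        self._trace = None                # scope currently being recorded

    def get(self, name):
        if self._trace is not None:
            self._trace.add(name)         # observe the access
        return self.elements[name]

class IncrementalChecker:
    def __init__(self, model, rules):
        self.model = model
        self.rules = rules                # rule name -> rule function
        self.scopes = {}                  # rule name -> elements it read
        self.results = {}                 # rule name -> bool (consistent?)
        for name in rules:
            self._evaluate(name)

    def _evaluate(self, name):
        self.model._trace = set()
        self.results[name] = self.rules[name](self.model)
        self.scopes[name] = self.model._trace
        self.model._trace = None

    def notify_change(self, element):
        # Re-evaluate only rules whose scope touched the changed element.
        affected = [n for n, s in self.scopes.items() if element in s]
        for name in affected:
            self._evaluate(name)
        return affected

model = Model({
    "msg": {"name": "send", "receiver": "Server"},
    "Server": {"methods": ["send", "recv"]},
})
rules = {
    # Toy rule: a message's name must be a method of its receiver.
    "msg_defined": lambda m: m.get("msg")["name"] in m.get("Server")["methods"],
}
checker = IncrementalChecker(model, rules)
model.elements["Server"]["methods"] = ["recv"]
print(checker.notify_change("Server"), checker.results["msg_defined"])
# -> ['msg_defined'] False
```

    Because the scope is observed rather than declared, the rule author never specifies which elements the rule depends on, which matches the "no special skills required" claim above.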

    Protocol-specific and sensor network-inherited attack detection in IoT using machine learning

    Get PDF
    For networks with limited resources, such as IoT-enabled smart homes, smart industrial equipment, and urban infrastructures, the Routing Protocol for Low-power and Lossy Networks (RPL) was developed. Additionally, a number of optimizations have been suggested for its application in other contexts, such as smart hospitals. Although these networks offer efficient routing, the lack of active security features in RPL makes them vulnerable to attacks. The types of attacks include protocol-specific ones and those inherited from wireless sensor networks. They have been addressed by a number of different proposals, many of which have achieved substantial prominence. However, concurrent handling of both types of attacks has not been considered when developing machine-learning-based attack detection models. Therefore, the ProSenAD model is proposed to address the identified gap. Multiclass classification has been used to optimize the light gradient boosting machine model for the detection of protocol-specific rank attacks and sensor-network-inherited wormhole attacks. The proposed model is evaluated in two different scenarios, considering the number of attacks and the benchmarks for comparison in each scenario. The evaluation results demonstrate that the proposed model outperforms the benchmarks with respect to metrics including accuracy, precision, recall, Cohen's Kappa, cross-entropy, and the Matthews correlation coefficient.
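    The multiclass setup described above (benign traffic vs. rank attack vs. wormhole attack) can be sketched as follows. This is only an illustration: scikit-learn's `GradientBoostingClassifier` stands in for LightGBM so the example is self-contained, and the features (rank change, hop count, RSSI) and their distributions are invented, not drawn from the paper's dataset.

```python
# Illustrative multiclass gradient-boosting sketch in the spirit of
# ProSenAD: classify traffic as benign, rank attack, or wormhole attack.
# Features and data are synthetic; sklearn replaces LightGBM here.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, matthews_corrcoef

rng = np.random.default_rng(0)
n = 300
# Synthetic per-class feature distributions: [rank change, hops, RSSI].
benign   = rng.normal([0.0, 3.0, -60.0], [0.5, 1.0, 5.0], (n, 3))
rank     = rng.normal([4.0, 3.0, -60.0], [0.5, 1.0, 5.0], (n, 3))  # abnormal rank change
wormhole = rng.normal([0.0, 1.0, -30.0], [0.5, 0.5, 5.0], (n, 3))  # short hops, strong RSSI
X = np.vstack([benign, rank, wormhole])
y = np.repeat([0, 1, 2], n)   # 0 = benign, 1 = rank attack, 2 = wormhole

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print(f"accuracy={accuracy_score(y_te, pred):.2f}  "
      f"MCC={matthews_corrcoef(y_te, pred):.2f}")
```

    Reporting the Matthews correlation coefficient alongside accuracy, as the abstract does, guards against misleadingly high accuracy on imbalanced attack traffic.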

    Improving Fuzzy Algorithms for Automatic Magnetic Resonance Image Segmentation

    No full text
    Abstract: In this paper, we present reliable algorithms for fuzzy k-means and C-means that could improve MRI segmentation. Since the k-means or FCM method aims to minimize the sum of squared distances from all points to their cluster centres, it should produce compact clusters. Therefore, the distance of the points from their cluster centre is used to determine whether the clusters are compact. For this purpose, we use the intra-cluster distance measure, which is simply the median distance between a point and its cluster centre. The intra-cluster measure is used to give us the ideal number of clusters automatically; i.e., the centre of the first cluster is used to estimate the second cluster, while the intra-cluster measure of the second cluster is obtained. Similarly, the third cluster is estimated from the second cluster's information (centre and intra-cluster distance), and so on, stopping only when the intra-cluster measure is smaller than a prescribed value. The proposed algorithms are evaluated and compared with established fuzzy k-means and C-means methods by applying them to simulated volumetric MRI and real MRI data to demonstrate their efficiency. For the real MRI dataset, these evaluations are judged visually by specialists, since in the absence of any prior knowledge about the resulting clusters a real MRI dataset cannot give us a quantitative measure of how successful the methods are.
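    The stopping idea described above, growing the number of clusters until the median point-to-centre distance falls below a prescribed value, can be sketched roughly as follows. This is a simplified illustration, not the paper's algorithm: scikit-learn's crisp k-means stands in for the fuzzy variants, and the 2-D synthetic blobs and threshold are invented for the example.

```python
# Rough sketch: increase k until the intra-cluster measure (median
# distance from each point to its cluster centre) drops below a
# prescribed threshold. Data and threshold are synthetic/hypothetical.
import numpy as np
from sklearn.cluster import KMeans

def choose_k(X, threshold, k_max=10):
    for k in range(1, k_max + 1):
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
        # Distance of every point to its own cluster centre.
        dists = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
        if np.median(dists) < threshold:
            return k, km
    return k_max, km

rng = np.random.default_rng(1)
# Three well-separated blobs standing in for tissue-intensity clusters.
X = np.vstack([rng.normal(c, 0.3, (100, 2)) for c in ([0, 0], [5, 0], [0, 5])])
k, model = choose_k(X, threshold=0.5)
print("chosen k =", k)
```

    On this synthetic data the median distance only drops below the threshold once each blob gets its own centre, so the cluster count is found without being specified in advance, which is the point of the intra-cluster measure above.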

    Software effort estimation by tuning COOCMO model parameters using differential evolution

    No full text
    Abstract: Accurate estimation of software project costs represents a challenge for many government organizations such as the Department of Defense (DoD) and NASA. Statistical models are widely used to assist in such computation. There is still an urgent need to find a mathematical model which can provide an accurate relationship between software project effort/cost and the cost drivers, and a powerful algorithm which can optimize such a relationship by tuning the mathematical model parameters. In [1], two new model structures to estimate the effort required for software projects using Genetic Algorithms (GAs) were proposed as a modification to the famous COnstructive COst MOdel (COCOMO). In this paper, we follow up on our previous work and present Differential Evolution (DE) as an alternative technique to estimate the COCOMO model parameters. The performance of the developed models was tested on the NASA software project dataset provided in [2]. The developed COCOMO-DE model was able to provide good estimation capabilities.
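    The parameter-tuning idea above can be sketched as follows, assuming the basic COCOMO form effort = a * KLOC^b. This is only a sketch: SciPy's `differential_evolution` stands in for the paper's DE variant, and the project data below are synthetic, not the NASA dataset from [2].

```python
# Hedged sketch: tune the basic COCOMO parameters (a, b) in
# effort = a * KLOC**b by minimising mean absolute error with
# differential evolution. Data are synthetic (generated from a=2.8,
# b=1.05 with mild noise), not the NASA dataset.
import numpy as np
from scipy.optimize import differential_evolution

kloc = np.array([2.0, 10.0, 46.0, 90.0, 161.0])          # project sizes
effort = 2.8 * kloc**1.05 * np.array([1.02, 0.97, 1.01, 1.03, 0.98])

def mae(params):
    a, b = params
    return np.mean(np.abs(effort - a * kloc**b))

result = differential_evolution(mae, bounds=[(0.1, 10.0), (0.3, 2.0)], seed=0)
a, b = result.x
print(f"a={a:.2f}, b={b:.2f}, MAE={result.fun:.2f} person-months")
```

    DE needs only the objective value, no gradients, which is why it suits tuning cost-model parameters against noisy historical project data.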